Subcutaneous Vein Recognition System Using Deep Learning for Intravenous (IV) Access Procedure
Authors
Abstract
Intravenous (IV) access is an important daily clinical procedure that delivers fluids or medication into a patient's vein. However, IV insertion is very challenging, as clinicians struggle to locate the subcutaneous vein due to patients' physiological factors, such as a hairy forearm and thick dermis fat, as well as the medical staff's level of fatigue. To resolve this issue, researchers have proposed that autonomous machines be used for IV access, but such equipment lacks the capability to detect veins accurately. Therefore, this project proposes an automatic vein detection algorithm using deep learning for IV access purposes. U-Net, a fully convolutional network (FCN) architecture, is employed in this project for its capability in segmenting near-infrared (NIR) subcutaneous vein images. Data augmentation is applied to increase the dataset size and reduce bias from overfitting. The original U-Net is optimized by replacing up-sampling with transpose convolution and adding batch normalization, besides reducing the number of layers to diminish the risk of overfitting. After fine-tuning and retraining the hypermodel, an unsupervised evaluation is used to assess the hypermodel, selecting 10 checkpoints for each image and comparing their predicted outputs to determine true positive pixels. The lightweight model has achieved slightly lower accuracy (0.8871) than the original architecture. Even so, sensitivity, specificity, and precision are greatly improved, achieving 0.7806, 0.9935, and 0.9918 respectively. This result indicates that the proposed model can help a venipuncture machine to accurately locate veins for intravenous (IV) access procedures.
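The modifications described in the abstract can be illustrated with a minimal sketch of a lightweight U-Net in Keras: fewer encoder/decoder levels than the original architecture, transpose convolution (Conv2DTranspose) in place of plain up-sampling, and batch normalization after every convolution, ending in a per-pixel sigmoid map for vein segmentation. The input shape, filter counts, depth, and training settings below are illustrative assumptions, not the authors' exact configuration.

# Minimal sketch of a lightweight U-Net with transpose convolution and batch
# normalization; shapes and hyperparameters are assumptions for illustration.
import tensorflow as tf
from tensorflow.keras import layers, Model

def conv_block(x, filters):
    # Two 3x3 convolutions, each followed by batch normalization and ReLU.
    for _ in range(2):
        x = layers.Conv2D(filters, 3, padding="same", use_bias=False)(x)
        x = layers.BatchNormalization()(x)
        x = layers.ReLU()(x)
    return x

def build_lightweight_unet(input_shape=(256, 256, 1), base_filters=16, depth=3):
    inputs = layers.Input(shape=input_shape)
    skips, x = [], inputs

    # Encoder: contracting path with max pooling, keeping skip connections.
    for level in range(depth):
        x = conv_block(x, base_filters * 2 ** level)
        skips.append(x)
        x = layers.MaxPooling2D(2)(x)

    # Bottleneck.
    x = conv_block(x, base_filters * 2 ** depth)

    # Decoder: transpose convolution instead of plain up-sampling,
    # concatenated with the matching encoder feature map.
    for level in reversed(range(depth)):
        x = layers.Conv2DTranspose(base_filters * 2 ** level, 2,
                                   strides=2, padding="same")(x)
        x = layers.Concatenate()([x, skips[level]])
        x = conv_block(x, base_filters * 2 ** level)

    # Per-pixel vein / background probability map.
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(x)
    return Model(inputs, outputs, name="lightweight_unet")

model = build_lightweight_unet()
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.Precision(),
                       tf.keras.metrics.Recall()])

Only the architecture is sketched here; the data augmentation mentioned in the abstract (e.g. flips and rotations of the NIR images) would be applied to the training set before fitting this model.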
Similar resources
Named Entity Recognition in Persian Text using Deep Learning
Named entity recognition is a fundamental task in the field of natural language processing. It is also known as a subset of information extraction. The process of recognizing named entities aims at finding proper nouns in the text and classifying them into predetermined classes such as names of people, organizations, and places. In this paper, we propose a named entity recognizer which benefi...
Emotion Recognition Using Multimodal Deep Learning
To enhance the performance of affective models and reduce the cost of acquiring physiological signals for real-world applications, we adopt multimodal deep learning approach to construct affective models with SEED and DEAP datasets to recognize different kinds of emotions. We demonstrate that high level representation features extracted by the Bimodal Deep AutoEncoder (BDAE) are effective for e...
Spoken Emotion Recognition Using Deep Learning
Spoken emotion recognition is a multidisciplinary research area that has received increasing attention over the last few years. In this paper, restricted Boltzmann machines and deep belief networks are used to classify emotions in speech. The motivation lies in the recent success reported using these alternative techniques in speech processing and speech recognition. This classifier is compared...
Pollen Grain Recognition Using Deep Learning
Pollen identification helps forensic scientists solve elusive crimes, provides data for climate-change modelers, and even hints at potential sites for petroleum exploration. Despite its wide range of applications, most pollen identification is still done by time-consuming visual inspection by well-trained experts. Although partial automation is currently available, automatic pollen identificati...
Facial Emotion Recognition using Deep Learning
Facial emotion recognition is one of the most important cognitive functions that our brain performs quite efficiently. State of the art facial emotion recognition techniques are mostly performance driven and do not consider the cognitive relevance of the model. This project is an attempt to look at the task of emotion recognition using deep belief networks which is cognitively very appealing an...
Journal
Journal title: International Journal of Integrated Engineering
Year: 2023
ISSN: 2229-838X, 2600-7916
DOI: https://doi.org/10.30880/ijie.2023.15.03.007